49 research outputs found

    Classifying motor imagery in presence of speech

    In the near future, brain-computer interface (BCI) applications for non-disabled users will require multimodal interaction and tolerance to dynamic environments. However, this conflicts with the highly sensitive recording techniques used for BCIs, such as electroencephalography (EEG). Advanced machine learning and signal processing techniques are required to decorrelate the desired brain signals from the rest. This paper proposes a signal processing pipeline and two classification methods suitable for multiclass EEG analysis. The methods were tested in an experiment on separating left/right hand imagery in the presence/absence of speech. The analyses showed that the presence of speech during motor imagery did not affect the classification accuracy significantly and that, regardless of the presence of speech, the proposed methods were able to separate left and right hand imagery with an accuracy of 60%. The best overall accuracy achieved for the 5-class separation of all the tasks was 47%, and both proposed methods performed equally well. In addition, the analysis of event-related spectral power changes revealed characteristics related to motor imagery and speech.
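
    The abstract does not spell out the pipeline; the sketch below only illustrates the general shape of such a motor-imagery classifier (band-power features from the mu and beta bands plus linear discriminant analysis, run on synthetic data). The band choices, sampling rate, and all names are assumptions, not the paper's actual methods.

    # Illustrative sketch, not the paper's pipeline: band-power features + LDA
    # for left/right hand motor-imagery classification.
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    FS = 256  # assumed sampling rate (Hz)

    def band_power(epochs, low, high, fs=FS):
        """Mean log band power per channel; epochs has shape (n_epochs, n_channels, n_samples)."""
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, epochs, axis=-1)
        return np.log(np.mean(filtered ** 2, axis=-1))

    # Synthetic stand-in data: 40 epochs, 32 channels, 2-second windows.
    rng = np.random.default_rng(0)
    X_raw = rng.standard_normal((40, 32, 2 * FS))
    y = rng.integers(0, 2, size=40)  # 0 = left hand, 1 = right hand

    # Features: mu (8-12 Hz) and beta (18-26 Hz) band power, concatenated.
    X = np.hstack([band_power(X_raw, 8, 12), band_power(X_raw, 18, 26)])
    print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())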

    Mixed reality participants in smart meeting rooms and smart home environments

    Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in multimodal interaction with a smart environment the user also displays characteristics that show how he or she, not necessarily consciously, provides the environment with useful verbal and nonverbal input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture) and human participants in the environment. Therefore it is useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, we discuss how remote meeting participants can take part in meeting activities, and we make some observations on translating research results to smart home environments.
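
    As a purely hypothetical illustration of such a profile (every field below is an assumption, not the authors' model), a minimal representation that combines classic preference data with a captured physical state might look like this:

    # Hypothetical sketch: a user profile extended with a physical representation
    # obtained from multimodal capture (position, head orientation, gesture, speech).
    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class PhysicalState:
        position: tuple[float, float, float]           # location in the room (metres)
        head_orientation: tuple[float, float, float]   # yaw, pitch, roll (degrees)
        gesture: str | None = None                     # e.g. "pointing", "nodding"
        speaking: bool = False

    @dataclass
    class UserProfile:
        name: str
        preferences: dict[str, str] = field(default_factory=dict)
        interests: list[str] = field(default_factory=list)
        physical: PhysicalState | None = None          # filled in by the capture pipeline

    alice = UserProfile(name="Alice", interests=["meetings", "robotics"])
    alice.physical = PhysicalState(position=(1.2, 0.0, 3.4),
                                   head_orientation=(15.0, -5.0, 0.0),
                                   speaking=True)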

    A Demonstration of Continuous Interaction with Elckerlyc

    We discuss behavior planning in the style of the SAIBA framework for continuous (as opposed to turn-based) interaction. Such interaction requires the real-time application of minor shape or timing modifications to running behavior and the anticipation of the behavior of a (human) interaction partner. We discuss how behavior (re)planning and on-the-fly parameter modification fit into the current SAIBA framework, and what type of language or architecture extensions might be necessary. Our BML realizer Elckerlyc provides flexible mechanisms for both the specification and the execution of modifications to running behavior. We show how these mechanisms are used in a virtual trainer and two turn-taking scenarios.
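
    Neither BML markup nor Elckerlyc's actual API appears in this abstract; the sketch below (all names hypothetical) only illustrates the general idea behind continuous interaction: nudging the timing of an already-running behavior toward a predicted partner event instead of re-planning it from scratch.

    # Hypothetical illustration of on-the-fly timing modification of a running behavior.
    # Names and structure are assumptions, not Elckerlyc's API.
    class RunningBehaviour:
        def __init__(self, name, stroke_time):
            self.name = name
            self.stroke_time = stroke_time              # planned time of the stroke (s)

        def adjust_timing(self, delta, max_shift=0.3):
            """Apply a small, bounded timing modification to keep motion natural."""
            delta = max(-max_shift, min(max_shift, delta))
            self.stroke_time += delta

    def continuous_update(behaviour, predicted_partner_event, gain=0.5):
        """Shift the running behaviour part of the way toward the predicted partner event."""
        error = predicted_partner_event - behaviour.stroke_time
        behaviour.adjust_timing(gain * error)

    nod = RunningBehaviour("head_nod", stroke_time=2.0)
    continuous_update(nod, predicted_partner_event=2.4)  # partner expected slightly later
    print(nod.stroke_time)                               # 2.2: nudged, not re-planned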

    A tractable DDN-POMDP Approach to Affective Dialogue Modeling for General Probabilistic Frame-based Dialogue Systems

    We propose a new approach to developing a tractable affective dialogue model for general probabilistic frame-based dialogue systems. The dialogue model, based on the Partially Observable Markov Decision Process (POMDP) and Dynamic Decision Network (DDN) techniques, is composed of two main parts: the slot-level dialogue manager and the global dialogue manager. Our implemented dialogue manager prototype can handle hundreds of slots, each of which might have many values. A first evaluation of the slot-level dialogue manager (1-slot case) showed that, with a 95% confidence level, the DDN-POMDP dialogue strategy outperforms three simple handcrafted dialogue strategies when the user's action error is induced by stress.
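
    The abstract does not give the underlying update equations; as a generic reminder of how a slot-level POMDP tracks its belief over a slot value (standard POMDP filtering, not the paper's specific DDN factorization; all names are assumptions), consider:

    # Minimal sketch of a POMDP belief update for a single slot:
    #   b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)
    import numpy as np

    def belief_update(b, T, O, action, obs):
        """b: (n_states,); T: (n_actions, n_states, n_states); O: (n_actions, n_states, n_obs)."""
        predicted = T[action].T @ b             # sum_s T(s' | s, a) b(s)
        updated = O[action][:, obs] * predicted
        return updated / updated.sum()

    # Toy 1-slot case: states = slot value {A, B}, actions = {ask, confirm},
    # observations = recognized user answer {said_A, said_B}.
    b = np.array([0.5, 0.5])
    T = np.stack([np.eye(2)] * 2)                       # the slot value itself does not change
    O = np.array([[[0.8, 0.2], [0.2, 0.8]],             # "ask": noisy recognition
                  [[0.9, 0.1], [0.1, 0.9]]])            # "confirm": more reliable
    print(belief_update(b, T, O, action=0, obs=0))      # belief shifts toward value A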

    Layering techniques for development of parallel systems: An algebraic approach


    A POMDP approach to Affective Dialogue Modeling

    We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's affective state and to act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process approach, with observations composed of the observed user's affective state and action. A simple example of route navigation is explained to clarify our approach. The preliminary results showed that: (1) the expected return of the optimal dialogue strategy depends on the correlation between the user's affective state and the user's action, and (2) the POMDP dialogue strategy outperforms five other dialogue strategies (the random strategy, three handcrafted strategies, and the greedy action-selection strategy).
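
    In standard POMDP notation, the composite observation can be read as the pair of the observed affective state and the observed user action; a sketch of the usual belief update under that reading (the paper's exact factorization may differ; here e_o is the observed affective state, a_u the observed user action, and a_m the machine's dialogue action) is:

    \[
    b'(s') \;\propto\; P(e_o, a_u \mid s', a_m) \sum_{s} P(s' \mid s, a_m)\, b(s)
    \]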

    Modular Completeness for Communication Closed Layers

    The Communication Closed Layers law is shown to be modular complete for a model related to that of Mazurkiewicz. It is shown that in a modular style of program development the CCL rule cannot be derived from simpler ones. Within a non-modular set-up, however, the CCL rule can be derived from a simpler independence rule and an analog of the expansion rule for process algebras. Part of this work has been supported by Esprit/BRA Project 6021 (REACT).
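
    For reference, the CCL law is usually stated roughly as follows (a sketch of its common form in the layers literature, where the bullet denotes layer/sequential composition and the parallel bars parallel composition; this is not the exact formulation whose modular completeness is proved in the paper):

    \[
    (P_1 \parallel Q_1) \bullet (P_2 \parallel Q_2) \;=\; (P_1 \bullet P_2) \parallel (Q_1 \bullet Q_2),
    \]

    provided the decomposition is communication closed, i.e. P_1 does not communicate with Q_2 and P_2 does not communicate with Q_1.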

    Leading and following with a virtual trainer

    This paper describes experiments with a virtual fitness trainer capable of mutually coordinated interaction. The virtual human co-exercises along with the user, leading as well as following in tempo, to motivate the user and to influence the speed with which the user performs the exercises. In a series of three experiments (20 participants in total) we attempted to influence the users' performance by manipulating the (timing of the) exercise behavior of the virtual trainer. The results show that it is possible to do this implicitly, using only micro-adjustments to its bodily behavior. As such, the system is a first step in the direction of mutually coordinated bodily interaction for virtual humans.
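
    As a purely illustrative sketch of the kind of micro-adjustment such lead/follow behavior involves (a simple proportional tempo-following rule, not the trainer's actual control law; all names and gains are assumptions):

    # Hypothetical tempo follower: each repetition, the trainer nudges its tempo a
    # small, bounded amount toward the user's observed tempo (following), optionally
    # keeping a slight positive offset to pull the user along (leading).
    def next_tempo(trainer_bpm, user_bpm, follow_gain=0.2, lead_offset=0.0, max_step=2.0):
        """Return the trainer's tempo for the next repetition (beats per minute)."""
        target = user_bpm + lead_offset                  # offset > 0 tries to speed the user up
        step = follow_gain * (target - trainer_bpm)      # micro-adjustment, not a jump
        step = max(-max_step, min(max_step, step))
        return trainer_bpm + step

    tempo = 60.0
    for observed_user_bpm in [58.0, 57.0, 59.0, 62.0]:
        tempo = next_tempo(tempo, observed_user_bpm, lead_offset=3.0)
        print(round(tempo, 1))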